2019 iT邦幫忙 Ironman Contest

DAY 22

Kubernetes

Building an On-Premises Angular + ASP.NET Core DevOps Environment series, part 22

day22_k8s08_Auto Scaling,ReplicaSet,Daemon Set,Proxy,StatefulSet


Foreword

Haha, only 8 days left in the countdown! Watching the folks who started earlier already reach day 30,
I'd like to finish early too.
Good luck to everyone taking part!

Today covers the various Sets.
Kubernetes runs through day 24 (tomorrow and the day after);
days 25–29 return to Ansible.

Scaling Applications

Elastic and scalable:
In a ticketing system, different concerts need different capacity; for a Mayday concert you would set replicas higher.
It depends on whether your application is stateful or stateless.
The stateless case is the simpler one.

Replica

Set replicas in a Deployment (recommended)
Define a ReplicaSet
Bare Pods: Pods created directly from a PodSpec; they are not restarted after a Node reboot, so this is not recommended
Job: after a Node reboot, a new Pod is created to keep the work running
DaemonSet: schedules pods that always serve the chosen Nodes; if a Node leaves the cluster, the pods the DaemonSet scheduled on it are removed

Reference: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

  • controllers/nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
$ kubectl scale --replicas=4 -f controllers/nginx-deployment.yaml

$ kubectl get deployments # check whether DESIRED and CURRENT become 4
  • Define a NodePort service for the nginx pods
$ kubectl expose deployment nginx-deployment --type=NodePort

# Alternatively, use a LoadBalancer service instead, so load is spread across the pods
$ kubectl expose deployment nginx-deployment --type=LoadBalancer --port=8080 --target-port=8080 --name nginx-load-balancer

# To see whether the LoadBalancer is working, observe whether client IPs are being distributed across the pods
$ kubectl describe services nginx-load-balancer
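The `kubectl expose` commands above can also be written declaratively. A minimal sketch of the LoadBalancer Service, assuming the `app: nginx` label and the 8080 port mapping from the command above (this manifest is not from the original post):

```yaml
# Hypothetical manifest roughly equivalent to the kubectl expose command above
apiVersion: v1
kind: Service
metadata:
  name: nginx-load-balancer
spec:
  type: LoadBalancer
  selector:
    app: nginx        # matches the pod label in nginx-deployment
  ports:
  - port: 8080        # port exposed by the load balancer
    targetPort: 8080  # port the traffic is forwarded to on the pods
```

Note that the Deployment's containers listen on containerPort 80, so in practice targetPort would likely need to be 80 for traffic to actually reach nginx.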

===

kubectl autoscale

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#autoscale

# 1 to 10 pods, targeting an average CPU utilization of 50% across all pods
$ kubectl autoscale deployment wordpress --cpu-percent=50 --min=1 --max=10

# This can of course also be written in a YAML file
$ kubectl apply -f ./wordpress-deployment.yaml

# Simulate load
$ kubectl run -i --tty load-generator --image=busybox /bin/sh

Hit enter for command prompt
$ while true; do wget -q -O- http://wordpress.default.svc.cluster.local; done

# Check the HPA status
# (Horizontal Pod Autoscaler: automatically maintains the configured replica count and CPU utilization)
$ kubectl get hpa
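The comment above notes that the autoscale rule can also live in a YAML file; a sketch of what that HorizontalPodAutoscaler could look like (the name `wordpress-hpa` is made up for illustration):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress            # the Deployment autoscaled above
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```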

===

Automatically scale pods based on metrics

Automatically scales Deployments, ReplicationControllers, and ReplicaSets

  • Autoscaling periodically queries the target pods
    The default interval is 30 seconds
    - uses Heapster (must be installed first)

hpa-example.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hpa-example
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hpa-example
    spec:
      containers:
      - name: hpa-example
        image: gcr.io/google_containers/hpa-example
        ports:
        - name: http-port
          containerPort: 80
        resources:
          requests: # requested CPU resources
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-example
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: http-port
    protocol: TCP
  selector:
    app: hpa-example
  type: NodePort
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler # horizontal scaling
metadata:
  name: hpa-example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: hpa-example
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50 # target 50% CPU utilization across the pods
$ kubectl create -f hpa-example.yaml
$ kubectl get hpa
$ kubectl run -it load-generator --image=busybox /bin/sh

# Simulate load from inside busybox
while true; do wget -q -O- http://hpa-example.default.svc.cluster.local:31001; done

# In another terminal: $ kubectl get pod # watch whether the number of pods increases

===

Daemon Sets

A Daemon Set ensures that a pod resource (pod replica) runs on (1..n) nodes.
When a node is added to the cluster, a new pod starts on it automatically.
When a node is removed, its pods are not rescheduled onto other nodes.

A Daemon Set is similar to a Replica Set:

  • Daemon Set: a pod replica should run on all nodes, or on specific nodes
    Unless nodeSelector is used, a pod replica is created on every node by default
  • Replica Set: use when the application should be decoupled from the nodes
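As noted above, a nodeSelector restricts which nodes a DaemonSet's pods land on. A sketch of the relevant fragment, assuming nodes were labeled with a made-up `role=logging` label (e.g. `kubectl label node <node-name> role=logging`):

```yaml
# Fragment of a DaemonSet spec; only nodes carrying the label get a pod
spec:
  template:
    spec:
      nodeSelector:
        role: logging   # hypothetical label key/value
```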

Typical use cases:

  • Long-running daemons
  • Logging aggregators (log collection)
  • Monitoring agents
  • Load Balancers / Reverse Proxies / API Gateways
  • When each node should run exactly one pod
  • Deploying a service across the whole cluster

Reference:
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

Creating a DaemonSet

This creates a fluentd log agent on every node in the cluster.

  • daemonset.yaml
apiVersion: apps/v1

kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
$ kubectl create -f https://k8s.io/examples/controllers/daemonset.yaml

# Check the status
$ kubectl describe daemonset fluentd-elasticsearch

# List pods along with their nodes
$ kubectl get pods -o wide

# Delete
$ kubectl delete -f daemonset.yaml

===

Proxy (via DaemonSets)

  • Reference: Chapter 7 (service discovery) of the book "Kubernetes建置與執行" (Kubernetes: Up and Running)

A Kubernetes cluster is assembled from many components,
and these core components run in the kube-system namespace.

The Kubernetes proxy is responsible for routing traffic to the load balancers inside the cluster.
It looks like this:

proxy # so it sits one layer outside the LB
 |
 V
load balancer

In reality, though, every node has to run a proxy instance.
Many clusters nowadays use the DaemonSet API object to do this job.
If your cluster runs the Kubernetes proxy as a DaemonSet, you can check it with:

$ kubectl get daemonsets --namespace=kube-system kube-proxy

NAME        DESIRED  CURRENT  READY  NODE-SELECTOR  AGE
kube-proxy  ... (elided)

===

StatefulSet

References:
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
https://jimmysong.io/kubernetes-handbook/concepts/statefulset.html

Managing stateful applications in k8s is a complex topic.

Deployments and ReplicaSets are suited to stateless services.

StatefulSet characteristics

  • The domain name of a StatefulSet's Pod has the format:
    statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local
    The headless Service and the StatefulSet must be in the same namespace
    .cluster.local is the cluster domain

  • A StatefulSet can use a headless Service to control its Pods' domain,
    in the format $(service's name).$(namespace).svc.cluster.local

  • Storage uses volumeClaimTemplates to achieve persistent storage

  • A headless Service keeps a Pod's PodName and HostName stable

Note: a headless Service is a Service without a cluster IP.

Deploying a simple web application with a StatefulSet

  • Create a StatefulSet
  • Manage its Pods through the StatefulSet
  • Delete the StatefulSet
  • Scale the StatefulSet
  • Update the StatefulSet's Pods

Creating a StatefulSet

  • application/web/web.yaml
apiVersion: v1
kind: Service # headless service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None # a headless service has no cluster IP
  selector:
    app: nginx
--- # below is the StatefulSet
apiVersion: apps/v1
kind: StatefulSet # this line makes it a StatefulSet
metadata:
  name: web # StatefulSet's Name
spec:
  serviceName: "nginx" # the Service's name
  replicas: 2 
  selector:
    matchLabels:
      app: nginx # select the pods to manage, by label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates: # data in storage survives pod rescheduling
# volumeClaimTemplates use PersistentVolumes from a PersistentVolume Provisioner
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
$ kubectl create -f web.yaml 
service/nginx created # the Service was created
statefulset.apps/web created # the StatefulSet was created

Check the Service and the StatefulSet separately:

$ kubectl get service nginx
NAME      TYPE         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx     ClusterIP    None         <none>        80/TCP    12s

$ kubectl get statefulset web
NAME      DESIRED   CURRENT   AGE
web       2         1         20s
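The scale and update steps listed earlier can be done imperatively (for example `kubectl scale statefulset web --replicas=5`) or declaratively. A sketch of an updateStrategy stanza that could be added under the StatefulSet's spec above (this stanza is not in the original manifest):

```yaml
# Goes under spec: of the StatefulSet named web
  updateStrategy:
    type: RollingUpdate   # replace pods one at a time, highest ordinal first
```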
